38 research outputs found

    Validation of AI-based Information Systems for Sensitive Use Cases: Using an XAI Approach in Pharmaceutical Engineering

    Get PDF
    Artificial Intelligence (AI) is adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as validation and auditing of AI remain unresolved. AI's opaqueness (i.e., its "black box" character) makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates the understanding of the system's behaviour. We conducted a case study in pharmaceutical manufacturing, where strict regulatory requirements apply. The validation approach and an XAI prototype were developed through multiple workshops and were then tested and evaluated with interviews. Our approach proved suitable for collecting the evidence required for a software validation, but requires additional effort compared to a traditional software validation. AI validation is an iterative process, and clear regulations and guidelines are needed.
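
    As a hedged illustration of the general idea (not the study's actual prototype, which is not public), the sketch below shows the kind of feature-attribution evidence an XAI tool can produce for an opaque model, using the shap library on a scikit-learn random forest; the data and labels are hypothetical.

```python
# Minimal sketch: per-feature attributions as validation evidence for an
# opaque model. Model, data, and labels are hypothetical; this illustrates
# the XAI idea only and is not the paper's prototype.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # hypothetical process readings
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hypothetical pass/fail label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature contributions for each prediction,
# which an auditor can compare against documented process knowledge.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # attributions for 5 samples
print(shap_values)
```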

    Changes of health-related quality of life within the 1st year after stroke – results from a prospective stroke cohort study

    Get PDF
    Introduction: As prospective data on long-term patient-reported outcome measures (PROMs) to assess health-related quality of life (HRQoL) after stroke are still scarce, this study examined the long-term course of PROMs and investigated influential factors such as recanalization therapies. Materials and Methods: A total of 945 stroke patients (mean age 69 years; 56% male) were enrolled, with a personal interview and chart review performed at the index event. One hundred forty patients (15%) received intravenous thrombolysis (IVT) and 53 patients (5%) received endovascular therapy (ET) or both treatments as bridging therapy (BT). After 3 and 12 months, a follow-up was conducted using a postal questionnaire including the EQ-5D-5L (European Quality of Life 5 Dimensions) instrument for subjective quality of life. At all time points, the modified Rankin Scale (mRS) was additionally used to quantify functional stroke severity. Differences between therapy groups were identified using post-hoc tests. Linear and logistic regression analyses were used to identify predictors of outcomes. Results: Compared to patients with stroke unit therapy only, recanalization therapies were associated with significant improvements between admission and discharge in the NIHSS (National Institutes of Health Stroke Scale) [regression coefficients: IVT 1.21 (p = 0.01); ET/BT 7.6 (p = 0.001)] and the mRS [regression coefficients: IVT 0.83 (p = 0.001); ET/BT 2.0 (p = 0.001)], with a trend toward improvement of EQ-5D after 12 months with IVT [regression coefficient 4.67 (p = 0.17)]. HRQoL was considerably impaired by stroke and increased steadily at the 3- and 12-month follow-ups in patients with (mean EQ-5D from 56 to 68) and without recanalization therapy (mean EQ-5D from 62 to 68). In severe strokes, a major and significant improvement was only detected during the period from 3 to 12 months (p = 0.03 in patients with and p = 0.005 in patients without recanalization therapy). Conclusions: Despite significant and continuous improvements after stroke, HRQoL after 12 months remained below that of the age-matched general population, but was still unexpectedly high in view of the accumulation of permanent disabilities in up to 30% of the patients. Especially in severe strokes, it is important to evaluate HRQoL beyond a 3-month follow-up, as improvements became significant only between 3 months and 1 year.
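
    As a minimal sketch of the regression setup the methods describe (outcome change modeled on therapy group, with stroke-unit-only care as the reference), the snippet below uses statsmodels on entirely hypothetical data; variable names and values are illustrative, not the study's data.

```python
# Sketch of the described analysis: change in NIHSS between admission and
# discharge regressed on recanalization therapy group, adjusted for age.
# All values here are made up; nothing is taken from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "nihss_change": [2, 5, 9, 1, 3, 8, 0, 4, 7, 2],   # hypothetical values
    "therapy": ["none", "IVT", "ET_BT", "none", "IVT",
                "ET_BT", "none", "IVT", "ET_BT", "none"],
    "age": [70, 65, 72, 80, 60, 68, 75, 66, 71, 69],
})

# Treatment coding with stroke-unit-only care ('none') as the reference;
# the therapy coefficients then estimate the therapy-associated change.
model = smf.ols("nihss_change ~ C(therapy, Treatment('none')) + age",
                data=df).fit()
print(model.summary())
```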

    Collaborative analysis of multi-gigapixel imaging data using Cytomine

    Get PDF
    Motivation: Collaborative analysis of massive imaging datasets is essential to enable scientific discoveries. Results: We developed Cytomine to foster active and distributed collaboration of multidisciplinary teams for large-scale image-based studies. It uses web development methodologies and machine learning to readily organize, explore, share and analyze (semantically and quantitatively) multi-gigapixel imaging data over the internet. We illustrate how it has been used in several biomedical applications.
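
    Cytomine itself is accessed over the web, but as a hedged, generic illustration of the tiled access pattern that makes multi-gigapixel images workable at all, the sketch below reads a single region of a whole-slide image with the openslide-python library; this is not the Cytomine API, and the file path is hypothetical.

```python
# Generic sketch of tiled access to a multi-gigapixel whole-slide image,
# the access pattern platforms like Cytomine build on. Uses openslide-python;
# this is NOT the Cytomine API, and the path below is hypothetical.
import openslide

slide = openslide.OpenSlide("slide.svs")   # hypothetical slide file
print(slide.dimensions)                    # full-resolution (width, height)
print(slide.level_count)                   # pyramid levels for fast zooming

# Read one 1024x1024 tile at pyramid level 2 instead of loading the whole
# image; (x, y) is given in level-0 (full-resolution) coordinates.
tile = slide.read_region((32768, 32768), 2, (1024, 1024))
tile.convert("RGB").save("tile.png")
slide.close()
```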

    Subcutaneous fat patterning in athletes: selection of appropriate sites and standardisation of a novel ultrasound measurement technique: ad hoc working group on body composition, health and performance, under the auspices of the IOC Medical Commission.

    Get PDF
    Background: Precise and accurate field methods for body composition analyses in athletes are needed urgently. Aim: Standardisation of a novel ultrasound (US) technique for accurate and reliable measurement of subcutaneous adipose tissue (SAT). Methods: Three observers captured US images of uncompressed SAT in 12 athletes and applied a semiautomatic evaluation algorithm for multiple SAT measurements. Results: Eight new sites are recommended: upper abdomen, lower abdomen, erector spinae, distal triceps, brachioradialis, lateral thigh, front thigh, medial calf. Obtainable accuracy was 0.2 mm (18 MHz probe; speed of sound: 1450 m/s). Reliability of SAT thickness sums (N = 36): R² = 0.998, SEE = 0.55 mm, ICC (95% CI) 0.998 (0.994 to 0.999); observer differences from their mean: 95% of the SAT thickness sums were within ±1 mm (sums of SAT thicknesses ranged from 10 to 50 mm). Embedded fibrous tissues were also measured. Conclusions: A minimum of eight sites is suggested to accommodate inter-individual differences in SAT patterning. All sites overlie muscle with a clearly visible fascia, which eases the acquisition of clear images; marking these sites takes only a few minutes. This US method reaches the fundamental accuracy and precision limits for SAT measurements given by tissue plasticity and furrowed borders, provided the measurers are trained appropriately.
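
    As a hedged sketch of the underlying arithmetic: an US device converts echo time to tissue depth via d = c·t/2 (the paper uses c = 1450 m/s for adipose tissue), and the outcome measure is the sum of SAT thicknesses over the eight recommended sites. The echo time and per-site thicknesses below are hypothetical.

```python
# Sketch of the measurement arithmetic: echo time -> depth via d = c*t/2
# (c = 1450 m/s in adipose tissue, as used in the paper), and the outcome
# measure as the sum of SAT thicknesses over the eight recommended sites.
# The echo time and thickness values below are hypothetical.

SPEED_OF_SOUND_SAT = 1450.0  # m/s, speed of sound in subcutaneous fat

def depth_mm(echo_time_s: float) -> float:
    """Depth of a reflecting interface; sound travels there and back."""
    return SPEED_OF_SOUND_SAT * echo_time_s / 2 * 1000  # m -> mm

# Hypothetical per-site SAT thicknesses (mm) at the eight recommended sites.
sites = {
    "upper abdomen": 4.1, "lower abdomen": 6.3, "erector spinae": 3.2,
    "distal triceps": 2.8, "brachioradialis": 1.9, "lateral thigh": 5.4,
    "front thigh": 4.7, "medial calf": 2.6,
}

print(f"Sum of SAT thicknesses over 8 sites: {sum(sites.values()):.1f} mm")

# Example conversion: a 6.9 microsecond echo corresponds to about 5.0 mm
# of fat (1450 * 6.9e-6 / 2 ~= 0.0050 m).
print(f"{depth_mm(6.9e-6):.2f} mm")
```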

    Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    No full text
    Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On the two test sets, we demonstrate our segmentation performance and achieve tissue classification accuracies of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
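
    A hedged sketch of the post-processing idea follows (not the authors' implementation): fuse the two CNN probability maps, regularize, and threshold to obtain a figure-ground segmentation. As a simplified stand-in for the paper's weighted-TV functional, it uses scikit-image's Chambolle TV denoiser, and random maps stand in for real CNN output.

```python
# Sketch of the described post-processing: combine the two CNN probability
# maps (glands vs. background, gland-separating structures), smooth the
# result with total variation regularization, and threshold to obtain the
# figure-ground segmentation. Not the authors' code; random maps stand in
# for real CNN output, and the weights are illustrative.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
p_gland = rng.random((256, 256))      # stand-in: P(pixel is gland)
p_separator = rng.random((256, 256))  # stand-in: P(pixel separates glands)

# Suppress gland probability where the separator network fires, so that
# touching glands are split before regularization.
fused = np.clip(p_gland - 0.5 * p_separator, 0.0, 1.0)

# TV regularization removes pixel-wise noise while preserving sharp gland
# boundaries; 'weight' trades smoothness against data fidelity.
smoothed = denoise_tv_chambolle(fused, weight=0.2)

segmentation = smoothed > 0.5         # final figure-ground mask
print(segmentation.mean())            # fraction of pixels labeled gland
```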

    IQM: an extensible and portable open source application for image and signal analysis in java

    Get PDF
    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims to complement rather than compete with existing open source software. Machine learning can also be integrated into more complex algorithms via the WEKA software package, enabling the development of transparent and robust methods for image and signal analysis.